Piecewise Differentiable Minimization for Ill-posed Inverse Problems

Funding from the National Science Foundation and IBM Corporation, with additional support from New York State and members of its Corporate Research Institute.
Authors
Abstract
Based on minimizing a piecewise differentiable ℓp function subject to a single inequality constraint, this paper discusses algorithms for a discretized regularization problem for ill-posed inverse problems. We examine the computational challenges of solving this regularization problem. Candidate minimization algorithms, such as the steepest descent method, the iteratively reweighted least squares (IRLS) method, and a recent globally convergent affine scaling Newton approach, are considered. The limitations and efficiency of these algorithms are demonstrated on geophysical traveltime tomographic inversion and image restoration applications.

1. Minimization and Ill-posed Inverse Problems. Minimization algorithms have long been used to regularize ill-posed inverse problems. Assuming that a desired property of the solution is known a priori, an ill-posed inverse problem can be regularized by solving a constrained minimization problem. In particular, properties expressed in nondifferentiable form have increasingly been found more appropriate in many applications. Discretization of such a regularization problem often leads to minimizing a large-scale piecewise differentiable function with a single constraint. In this paper, we consider regularization using piecewise differentiable minimization, possibly with a single inequality constraint.

Consider an ill-posed inverse problem Au = b, where A is an operator in a Hilbert space. Assume that ‖·‖₂ denotes the Euclidean norm and that an a priori condition (e.g., continuity or boundedness) on the desired solution is given by a bound on ‖Bu‖₂ for some linear operator B.

This paper is written for the proceedings of the workshop Large-Scale Optimization.
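To make the IRLS method mentioned above concrete, here is a minimal sketch of iteratively reweighted least squares for the unconstrained ℓp data-fitting subproblem min ‖Au − b‖_p^p. The constraint handling and the paper's specific operators are omitted, and all names, tolerances, and the damping parameter `eps` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def irls_lp(A, b, p=1.0, iters=50, eps=1e-8):
    """Approximately minimize ||A u - b||_p^p by IRLS.

    Each iteration solves a weighted least-squares problem whose
    weights w_i = |r_i|^(p-2) come from the current residual
    r = A u - b; eps damps the weights near zero residuals.
    """
    u = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the l2 solution
    for _ in range(iters):
        r = A @ u - b
        w = np.power(np.maximum(np.abs(r), eps), p - 2.0)
        sw = np.sqrt(w)
        # weighted least-squares step: min || diag(sw) (A u - b) ||_2
        u = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return u
```

For p near 1 this downweights large residuals, which is why IRLS is attractive for the robust, nondifferentiable objectives discussed above; for p = 2 the weights are constant and the iteration reduces to ordinary least squares.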
Similar references
Communication Overhead on the Intel Paragon, IBM SP2 & Meiko CS-2
Interprocessor communication overhead is a crucial measure of the power of parallel computing systems; its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2, and the Meiko CS-2. In each case the time to communicate between processors is p...
Ill-Posed and Linear Inverse Problems
In this paper, ill-posed linear inverse problems that arise in many applications are considered. The instability of a special kind of these problems, and its relation to the kernel, is described. Finding a stable solution to these problems requires some form of regularization, which is presented. The results have been applied to a singular equation.
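The regularization this entry alludes to can be sketched minimally with Tikhonov (ℓ2) regularization, the standard way to stabilize an ill-posed linear system; the operator, data, and parameter values below are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_u ||A u - b||_2^2 + lam * ||u||_2^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Increasing `lam` trades fidelity to the data for a smaller-norm, more stable solution; with `lam = 0` the formula reduces to the ordinary least-squares normal equations.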
Some Properties of Empirical Risk Minimization Over Donsker Classes
We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Therefore, as the number of samples grows, it becomes unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) t...